chore(ups): add benches #2878
Conversation
```rust
    },
    // Per-iteration teardown: drop the per-iteration subscription to unsubscribe
    |_, mut sub| async move {
        drop(&mut sub);
```
The `drop(&mut sub)` expression doesn't actually drop the subscriber value, only the mutable reference to it. This means the subscription will remain active, since the actual subscriber isn't being dropped. To properly clean up resources and trigger unsubscription, this should be changed to `drop(sub)`, without the reference operator.
Suggested change:

```diff
-        drop(&mut sub);
+        drop(sub);
```
Spotted by Diamond
force-pushed from 9d59211 to bf98d22
force-pushed from 39ab6df to 3586f1c
force-pushed from bf98d22 to c9cff29
force-pushed from 3586f1c to f68b17f
```rust
tokio::spawn(async move {
    // Use subscribe_and_wait_propagate to ensure subscription is ready
    let mut sub = subscribe_and_wait_propagate(&subscriber_clone, &subject_clone)
        .await
        .unwrap();
    let _ = ready_tx.send(());
    loop {
        match sub.next().await {
            std::result::Result::Ok(NextOutput::Message(msg)) => {
                let _ = msg.reply(&msg.payload).await;
            }
            _ => break,
        }
    }
});
```
The spawned task in the request benchmark setup creates an infinite loop that continues running after the benchmark completes. This can lead to resource leakage as these background tasks accumulate across benchmark runs. Consider either:
- capturing the `JoinHandle` from `tokio::spawn()` and aborting it during teardown, or
- using a channel or atomic flag to signal the task to exit gracefully when the benchmark completes.

This ensures proper cleanup of resources between benchmark iterations and prevents potential memory/thread leaks during extended benchmark runs.
Suggested change (replace the spawned block with a shutdown-aware version):

```rust
let (shutdown_tx, mut shutdown_rx) = tokio::sync::oneshot::channel();
let handle = tokio::spawn(async move {
    // Use subscribe_and_wait_propagate to ensure subscription is ready
    let mut sub = subscribe_and_wait_propagate(&subscriber_clone, &subject_clone)
        .await
        .unwrap();
    let _ = ready_tx.send(());
    loop {
        tokio::select! {
            msg_result = sub.next() => {
                match msg_result {
                    std::result::Result::Ok(NextOutput::Message(msg)) => {
                        let _ = msg.reply(&msg.payload).await;
                    }
                    _ => break,
                }
            }
            _ = &mut shutdown_rx => {
                break;
            }
        }
    }
});
```
Spotted by Diamond
```rust
let timeout_result =
    tokio::time::timeout(Duration::from_millis(10), guard.next()).await;
```
The 10ms timeout creates a potential race condition in the benchmark. Under high load or network latency, legitimate messages might arrive after the timeout expires, causing the benchmark to undercount messages and potentially report inaccurate results. Consider one of the following:
- using a longer, more conservative timeout value (e.g., 100ms),
- implementing a more deterministic approach that doesn't rely on timing, or
- adding instrumentation to detect and report when messages might have been missed.

This is particularly important for benchmarks that need to produce consistent, reproducible results across different environments.
Suggested change:

```diff
-let timeout_result =
-    tokio::time::timeout(Duration::from_millis(10), guard.next()).await;
+let timeout_result =
+    tokio::time::timeout(Duration::from_millis(100), guard.next()).await;
```
Spotted by Diamond
force-pushed from c9cff29 to a294c2e
Merge activity
Fixes RVT-5128